25 research outputs found

    Upper and Lower Bounds on the Smoothed Complexity of the Simplex Method

    The simplex method for linear programming is known to be highly efficient in practice, and understanding its performance from a theoretical perspective is an active research topic. The framework of smoothed analysis, first introduced by Spielman and Teng (JACM '04) for this purpose, defines the smoothed complexity of solving a linear program with $d$ variables and $n$ constraints as the expected running time when Gaussian noise of variance $\sigma^2$ is added to the LP data. We prove that the smoothed complexity of the simplex method is $O(\sigma^{-3/2} d^{13/4} \log^{7/4} n)$, improving the dependence on $1/\sigma$ compared to the previous bound of $O(\sigma^{-2} d^2 \sqrt{\log n})$. We accomplish this through a new analysis of the \emph{shadow bound}, which was key to earlier analyses as well. Illustrating the power of the new method, we use it to prove a nearly tight upper bound on the smoothed complexity of two-dimensional polygons. We also establish the first non-trivial lower bound on the smoothed complexity of the simplex method, proving that the \emph{shadow vertex simplex method} requires at least $\Omega\big(\min\big(\sigma^{-1/2} d^{-1/2} \log^{-1/4} d,\, 2^d\big)\big)$ pivot steps with high probability. A key part of our analysis is a new variation on the extended formulation for the regular $2^k$-gon. We end with a numerical experiment that suggests this analysis could be further improved. Comment: 41 pages, 5 figures
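The smoothed perturbation model is easy to illustrate numerically in the two-dimensional case the abstract mentions. The sketch below is our own toy illustration (not the paper's method; the function name is ours): it builds a smoothed 2D polytope {x : a_i·x ≤ 1} whose unit constraint normals receive Gaussian noise of standard deviation σ, and counts its vertices by enumerating pairwise line intersections.

```python
import numpy as np

def shadow_polygon_vertices(n, sigma, rng):
    """Count vertices of a smoothed 2D polytope {x : a_i . x <= 1}.

    Each constraint normal a_i is a random unit vector plus Gaussian noise
    of standard deviation sigma (the smoothed-analysis perturbation model).
    A vertex is the intersection of two constraint lines that satisfies
    all remaining constraints.
    """
    angles = rng.uniform(0, 2 * np.pi, n)
    A = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    A = A + sigma * rng.standard_normal((n, 2))
    b = np.ones(n)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            M = A[[i, j]]
            if abs(np.linalg.det(M)) < 1e-12:
                continue  # (near-)parallel constraints, no vertex
            x = np.linalg.solve(M, b[[i, j]])
            if np.all(A @ x <= b + 1e-9):
                count += 1
    return count

rng = np.random.default_rng(0)
# The polygon always contains the origin; with n roughly uniform normals
# it is bounded, so the vertex count lies between 3 and n.
v = shadow_polygon_vertices(30, 0.1, rng)
```

With sigma = 0 every tangent line to the unit circle is a facet, so the count is exactly n; noise can make some constraints redundant.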

    Geometric aspects of linear programming : shadow paths, central paths, and a cutting plane method

    Most everyday algorithms are well-understood; predictions made theoretically about them closely match what we observe in practice. This is not the case for all algorithms, and some algorithms are still poorly understood on a theoretical level. This is the case for many algorithms used for solving optimization problems from operations research. Solving such optimization problems is essential in many industries and is done every day. One important example of such optimization problems are Linear Programming problems. There are a couple of different algorithms that are popular in practice, among which is one that has been in use for almost 80 years. Nonetheless, our theoretical understanding of these algorithms is limited. This thesis makes progress towards a better understanding of these key algorithms for linear programming, among which are the simplex method, interior point methods, and cutting plane methods.

    Smoothed analysis of the simplex method

    In this chapter, we give a technical overview of smoothed analyses of the shadow vertex simplex method for linear programming (LP). We first review the properties of the shadow vertex simplex method and its associated geometry. We begin the smoothed analysis discussion with an analysis of the successive shortest path algorithm for the minimum-cost maximum-flow problem under objective perturbations, a classical instantiation of the shadow vertex simplex method. Then we move to general linear programming and give an analysis of a shadow vertex based algorithm for linear programming under Gaussian constraint perturbations.

    A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix

    Following the breakthrough work of Tardos in the bit-complexity model, Vavasis and Ye gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) $\max c^\top x,\ Ax = b,\ x \geq 0$, $A \in \mathbb{R}^{m \times n}$, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that $O(n^{3.5} \log(\bar{\chi}_A + n))$ iterations suffice to solve (LP) exactly, where $\bar{\chi}_A$ is a condition measure controlling the size of solutions to linear systems related to $A$. Monteiro and Tsuchiya, noting that the central path is invariant under rescalings of the columns of $A$ and $c$, asked whether there exists an LP algorithm depending instead on the measure $\bar{\chi}^*_A$, defined as the minimum $\bar{\chi}_{AD}$ value achievable by a column rescaling $AD$ of $A$, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an $O(m^2 n^2 + n^3)$ time algorithm which works on the linear matroid of $A$ to compute a nearly optimal diagonal rescaling $D$ satisfying $\bar{\chi}_{AD} \leq n(\bar{\chi}^*)^3$. This algorithm also allows us to approximate the value of $\bar{\chi}_A$ up to a factor $n(\bar{\chi}^*)^2$. As our second main contribution, we develop a scaling-invariant LLS algorithm, together with a refined potential function based analysis for LLS algorithms in general. With this analysis, we derive an improved $O(n^{2.5} \log n \log(\bar{\chi}^*_A + n))$ iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor $n/\log n$ improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
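For intuition about the condition measure, $\bar{\chi}_A$ admits the known characterization $\bar{\chi}_A = \max_B \|A_B^{-1} A\|_2$ over nonsingular column bases $B$, which can be brute-forced for tiny matrices. The sketch below is our own illustration (not the paper's algorithm, which avoids this exponential enumeration); it also shows how a column rescaling $AD$ can shrink the measure, the phenomenon behind $\bar{\chi}^*_A$.

```python
import itertools
import numpy as np

def chi_bar(A):
    """Brute-force the condition measure chi_bar_A via the known
    characterization chi_bar_A = max over nonsingular column bases B
    of ||A_B^{-1} A||_2 (only viable for tiny matrices)."""
    m, n = A.shape
    best = 0.0
    for cols in itertools.combinations(range(n), m):
        AB = A[:, cols]
        if abs(np.linalg.det(AB)) < 1e-12:
            continue  # columns do not form a basis
        best = max(best, np.linalg.norm(np.linalg.solve(AB, A), 2))
    return best

# A badly scaled matrix: the basis {columns 0, 1} sees the huge third column.
A = np.array([[1.0, 0.0, 100.0],
              [0.0, 1.0, 100.0]])
# Rescaling the third column by 1/100 removes the bad scaling entirely,
# so chi_bar of the rescaled matrix is far smaller than chi_bar(A).
D = np.diag([1.0, 1.0, 0.01])
big, small = chi_bar(A), chi_bar(A @ D)
```

Here `big` is about 141 while `small` equals √3, illustrating that $\bar{\chi}^*_A$ can be much smaller than $\bar{\chi}_A$.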

    A simple method for convex optimization in the oracle model

    We give a simple and natural method for computing approximately optimal solutions for minimizing a convex function f over a convex set K given by a separation oracle. Our method utilizes the Frank–Wolfe algorithm over the cone of valid inequalities of K and subgradients of f. Under the assumption that f is L-Lipschitz and that K contains a ball of radius r and is contained inside the origin-centered ball of radius R, using O((RL)^2/ε^2 · R^2/r^2) iterations and calls to the oracle, our main method outputs a point x ∈ K satisfying f(x) ≤ ε + min_{z∈K} f(z). Our algorithm is easy to implement, and we believe it can serve as a useful alternative to existing cutting plane methods. As evidence towards this, we show that it compares favorably in terms of iteration counts to the standard LP-based cutting plane method and the analytic center cutting plane method, on a testbed of combinatorial, semidefinite and machine learning instances.
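For reference, the textbook Frank–Wolfe iteration the paper builds on looks as follows. This minimal sketch is our own: it runs over a Euclidean ball with an exact linear minimization oracle, not over the cone of valid inequalities accessed through a separation oracle as in the paper.

```python
import numpy as np

def frank_wolfe_ball(grad, x0, radius, iters):
    """Plain Frank-Wolfe over the Euclidean ball of the given radius.

    At each step the linear minimization oracle over the ball returns
    s = -radius * g / ||g||, and the iterate moves by a convex
    combination with the standard step size gamma_t = 2 / (t + 2).
    """
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        norm = np.linalg.norm(g)
        if norm < 1e-12:
            break  # gradient vanished: x is already optimal
        s = -radius * g / norm          # linear minimization oracle
        gamma = 2.0 / (t + 2.0)
        x = (1 - gamma) * x + gamma * s
    return x

# Minimize f(x) = ||x - c||^2 over the unit ball, with c outside the ball;
# the optimum is the projection c / ||c|| = (1, 0).
c = np.array([2.0, 0.0])
x = frank_wolfe_ball(lambda x: 2 * (x - c), np.zeros(2), 1.0, 200)
```

The method is projection-free: it only ever calls the linear oracle, which is the property the paper exploits when replacing the oracle with separation-based machinery.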

    A nearly optimal randomized algorithm for explorable heap selection

    Explorable heap selection is the problem of selecting the nth smallest value in a binary heap. The key values can only be accessed by traversing through the underlying infinite binary tree, and the complexity of the algorithm is measured by the total distance traveled in the tree (each edge has unit cost). This problem was originally proposed as a model to study search strategies for the branch-and-bound algorithm with storage restrictions by Karp, Saks and Wigderson (FOCS '86), who gave deterministic and randomized n·exp(O(√(log n))) time algorithms using O(log(n)^{2.5}) and O(√(log n)) space respectively. We present a new randomized algorithm with running time O(n log(n)^3) using O(log n) space, substantially improving the previous best randomized running time at the expense of slightly increased space usage. We also show an Ω(n log(n)/log(log(n))) lower bound for any algorithm that solves the problem in the same amount of space, indicating that our algorithm is nearly optimal.
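The travel-cost model is easy to simulate. The sketch below is our own toy (not the paper's algorithm): it generates a random min-heap lazily, runs the naive best-first selection strategy, and charges tree distance for every move between expanded nodes.

```python
import heapq
import random

def explorable_heap_select(n, depth, seed=0):
    """Select the n-th smallest key in a random binary min-heap, charging
    unit cost per tree edge traversed (the explorable model).

    The heap is generated lazily: each child's key is its parent's key
    plus a positive pseudo-random increment, so the heap property holds.
    The naive strategy walks from the last expanded node to the next
    smallest frontier node via their lowest common ancestor.
    """
    rng = random.Random(seed)
    keys = {(): 0.0}  # a node is the tuple of 0/1 moves from the root

    def key(node):
        if node not in keys:
            keys[node] = key(node[:-1]) + rng.random()
        return keys[node]

    def dist(a, b):  # tree distance via the lowest common ancestor
        i = 0
        while i < min(len(a), len(b)) and a[i] == b[i]:
            i += 1
        return (len(a) - i) + (len(b) - i)

    frontier = [(0.0, ())]
    pos, travel, popped = (), 0, []
    while len(popped) < n:
        k, node = heapq.heappop(frontier)
        travel += dist(pos, node)  # pay for walking to the new node
        pos = node
        popped.append(k)
        if len(node) < depth:
            for child in (node + (0,), node + (1,)):
                heapq.heappush(frontier, (key(child), child))
    return popped[-1], travel

val, cost = explorable_heap_select(10, 20)
```

Because every child key exceeds its parent's, the best-first pops come out in sorted order, so the n-th pop is the n-th smallest key; the interesting quantity is how much `travel` the strategy accumulates.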

    Asymptotic bounds on the combinatorial diameter of random polytopes

    The combinatorial diameter $\operatorname{diam}(P)$ of a polytope $P$ is the maximum shortest path distance between any pair of vertices. In this paper, we provide upper and lower bounds on the combinatorial diameter of a random "spherical" polytope, which are tight to within one factor of dimension when the number of inequalities is large compared to the dimension. More precisely, for an $n$-dimensional polytope $P$ defined by the intersection of $m$ i.i.d.\ half-spaces whose normals are chosen uniformly from the sphere, we show that $\operatorname{diam}(P)$ is $\Omega(n m^{\frac{1}{n-1}})$ and $O(n^2 m^{\frac{1}{n-1}} + n^5 4^n)$ with high probability when $m \geq 2^{\Omega(n)}$. For the upper bound, we first prove that the number of vertices in any fixed two-dimensional projection sharply concentrates around its expectation when $m$ is large, where we rely on the $\Theta(n^2 m^{\frac{1}{n-1}})$ bound on the expectation due to Borgwardt [Math. Oper. Res., 1999]. To obtain the diameter upper bound, we stitch these ``shadow paths'' together over a suitable net, using worst-case diameter bounds to connect vertices to the nearest shadow. For the lower bound, we first reduce to lower bounding the diameter of the dual polytope $P^\circ$, corresponding to a random convex hull, by showing the relation $\operatorname{diam}(P) \geq (n-1)(\operatorname{diam}(P^\circ) - 2)$. We then prove that the shortest path between any ``nearly'' antipodal pair of vertices of $P^\circ$ has length $\Omega(m^{\frac{1}{n-1}})$.
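The case n = 2 gives quick intuition for the $m^{1/(n-1)}$ scaling: every one of the m i.i.d. halfplanes is tangent to the unit disk and hence a facet, so the vertex-edge graph of the polygon is an m-cycle whose combinatorial diameter is ⌊m/2⌋ = Θ(m^{1/(n-1)}). A toy check of that count, our own illustration computing the graph diameter by breadth-first search:

```python
from collections import deque

def cycle_diameter(m):
    """Graph diameter of the m-cycle, computed by BFS from every vertex.

    In 2D, a polytope cut out by m halfplanes with distinct unit normals
    (each tangent to the unit disk) has all m constraints as facets, so
    its vertex-edge graph is exactly this cycle.
    """
    adj = {i: [(i - 1) % m, (i + 1) % m] for i in range(m)}

    def eccentricity(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())

    return max(eccentricity(i) for i in range(m))
```

For m = 11 this returns 5 and for m = 12 it returns 6, i.e. ⌊m/2⌋, linear in m exactly as $m^{1/(n-1)}$ predicts at n = 2.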